This paper reports on the state of practice in underground SLAM by discussing the different SLAM strategies and results across six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to approach for virtually all teams in the competition), heterogeneous multi-robot operation (including both aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems, which are likely to require further research to break through. Finally, we provide a list of open-source SLAM implementations and datasets that were produced during the SubT challenge and related efforts, and that constitute a useful resource for researchers and practitioners.
We present an implicit-likelihood approach to quantifying cosmological information over discrete catalogue data, assembled as graphs. To do so, we explore cosmological inference using mock dark matter halo catalogues. We employ Information Maximising Neural Networks (IMNNs) to quantify Fisher information extraction as a function of graph representation. We a) demonstrate the high sensitivity of modular graph structure to the underlying cosmology in the noise-free limit, b) show that the networks automatically combine mass and clustering information through comparisons to traditional statistics, c) demonstrate that graph neural networks can still extract information when catalogues are subject to noisy survey cuts, and d) illustrate how nonlinear IMNN summaries can be used as asymptotically optimal compressed statistics for Bayesian implicit-likelihood inference. Relative to the two-point correlation function, we reduce joint $\Omega_m, \sigma_8$ parameter constraints by a factor of 42. This work utilises a new IMNN implementation over graph data in JAX, which can take advantage of either numerical or automatic differentiability. We also show that IMNNs successfully compress simulations far from the fiducial model at which the network is fitted, suggesting a promising alternative to $n$-point statistics in catalogue-based analyses.
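The Fisher-information objective that an IMNN maximizes can be illustrated on a toy problem. This is a minimal sketch, not the paper's JAX implementation: `simulate_summaries` is a hypothetical one-parameter simulator, and the Fisher information is estimated by finite differences under a Gaussian-summary assumption.

```python
import numpy as np

def simulate_summaries(theta, n, rng):
    # Hypothetical stand-in for a halo-catalogue simulator: each summary
    # responds linearly to the parameter theta with unit Gaussian noise.
    return 2.0 * theta + rng.normal(0.0, 1.0, size=n)

def fisher_from_summaries(theta, h=1e-2, n=200_000, seed=0):
    # Gaussian Fisher information F = (dmu/dtheta)^2 / C, with the mean
    # derivative estimated by central differences using common random
    # numbers (same seed on both sides) to suppress Monte Carlo noise.
    s_plus = simulate_summaries(theta + h, n, np.random.RandomState(seed))
    s_minus = simulate_summaries(theta - h, n, np.random.RandomState(seed))
    dmu = (s_plus.mean() - s_minus.mean()) / (2 * h)
    C = simulate_summaries(theta, n, np.random.RandomState(seed + 1)).var()
    return dmu ** 2 / C

F = fisher_from_summaries(theta=0.5)
print(F)  # analytically 2^2 / 1 = 4 for this toy simulator
```

An IMNN replaces the hand-chosen summary with a neural network trained (by differentiating through simulations) to maximize the determinant of exactly this kind of Fisher matrix over its outputs.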
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Electronic Health Records (EHRs) are an essential component of modern medical systems, impacting healthcare delivery, operations, and research. Despite the structured information in EHRs, unstructured text has attracted much attention and has become an exciting research field. The success of recent neural Natural Language Processing (NLP) methods has led to new directions for processing unstructured clinical notes. In this work, we create a Python library for clinical texts, EHRKit. This library contains two main parts: MIMIC-III-specific functions and task-specific functions. The first part provides a list of interfaces for accessing MIMIC-III NOTEEVENTS data, including basic search, information retrieval, and information extraction. The second part integrates many third-party libraries for up to 12 off-the-shelf NLP tasks, such as named entity recognition, summarization, machine translation, and more.
As COVID-19 impacts every country globally and changes everyday life, the ability to forecast the spread of the disease is more important than for any previous epidemic. The conventional disease-spread modeling approach, compartmental models, is based on the assumption of spatiotemporal homogeneity in the spread of the virus, which can cause forecasts to underperform, particularly at high spatial resolutions. In this paper we adopt an alternative technique: spatiotemporal machine learning. We propose COVID-LSTM, a data-driven model based on a Long Short-Term Memory deep learning architecture, for forecasting COVID-19 incidence at the county level in the US. We use the weekly number of new positive cases as temporal input, and hand-engineered spatial features from Facebook movement and connectedness datasets, to capture the spread of the disease in time and space. COVID-LSTM outperforms the COVID-19 Forecast Hub ensemble model (COVIDhub-ensemble) over our 17-week evaluation period, making it the first model to be more accurate than the COVIDhub-ensemble over one or more forecast horizons. Over the 4-week forecast horizon, our model is, on average, 50 cases per county more accurate than the COVIDhub-ensemble. We highlight that the underutilization of data-driven forecasting of disease spread prior to COVID-19 is likely due to the lack of sufficient data for previous diseases, in addition to the fact that recent advances in machine learning methods for forecasting are themselves recent. We discuss the barriers to wider adoption of data-driven forecasting and the ways in which learning-based models will be used more in the future.
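The core recurrence of the LSTM architecture that COVID-LSTM builds on can be sketched in a few lines. This is an illustrative single-cell forward step in NumPy with hypothetical dimensions, not the paper's model; the input vector would concatenate each week's new-case count with the hand-engineered spatial features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM recurrence: x is the current input (e.g. this week's case
    # count plus spatial features), (h, c) the previous hidden/cell state.
    # Shapes: W (4H, D), U (4H, H), b (4H,); gate order: input, forget,
    # candidate, output.
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    g = np.tanh(z[2 * H:3 * H])   # candidate cell update
    o = sigmoid(z[3 * H:])        # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Roll the cell over a toy weekly sequence for one county.
rng = np.random.RandomState(0)
D, H = 8, 16                      # hypothetical feature / hidden sizes
W = rng.randn(4 * H, D) * 0.1
U = rng.randn(4 * H, H) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for week in rng.randn(17, D):     # 17 weeks, matching the evaluation period
    h, c = lstm_step(week, h, c, W, U, b)
print(h.shape)  # (16,)
```

In the full model the final hidden state would feed a regression head that predicts the next weeks' case counts per county.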
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key steps extraction. We employ self-supervised representation learning via a training strategy that adapts off-the-shelf video features using a temporal module. Training implements self-supervised learning losses involving multiple cues such as appearance, motion and pose trajectories extracted from videos to learn generalizable representations. Our method extracts key steps via a tunable algorithm that clusters the representations extracted from procedural videos. We quantitatively evaluate our approach with key step localization and also demonstrate the effectiveness of the extracted representations on related downstream tasks like phase classification. Qualitative results demonstrate that the extracted key steps are meaningful to succinctly represent the procedural tasks.
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
In training neural networks, batch normalization has many benefits, not all of them entirely understood. But it also has some drawbacks. Foremost is arguably memory consumption, as computing the batch statistics requires all instances within the batch to be processed simultaneously, whereas without batch normalization it would be possible to process them one by one while accumulating the weight gradients. Another drawback is that the distribution parameters (mean and standard deviation) are unlike all other model parameters in that they are not trained using gradient descent but require special treatment, complicating implementation. In this paper, I show a simple and straightforward way to address these issues. The idea, in short, is to add terms to the loss that, for each activation, minimize the negative log likelihood of a Gaussian distribution that is used to normalize the activation. Among other benefits, this will hopefully contribute to the democratization of AI research by lowering the hardware requirements for training larger models.
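A minimal sketch of the proposed loss term, under the assumption that each activation gets learned parameters `mu` and `log_sigma` (the paper's exact parameterization may differ). Minimizing the Gaussian negative log-likelihood drives these toward the activation statistics, after which the same parameters normalize the activation without any batch statistics:

```python
import numpy as np

def gaussian_nll(a, mu, log_sigma):
    # Negative log-likelihood of activations a under N(mu, sigma^2), up to
    # an additive constant; added to the task loss as an extra term.
    return np.mean(log_sigma + 0.5 * ((a - mu) / np.exp(log_sigma)) ** 2)

def normalize(a, mu, log_sigma):
    # Normalization uses the learned (mu, sigma) rather than batch
    # statistics, so instances can be processed one at a time.
    return (a - mu) / np.exp(log_sigma)

# At the NLL minimum, mu and sigma match the activation statistics,
# so the normalized output is standardized.
a = np.random.RandomState(0).normal(3.0, 2.0, size=10_000)
mu_star, log_sigma_star = a.mean(), np.log(a.std())
z = normalize(a, mu_star, log_sigma_star)
print(np.allclose([z.mean(), z.std()], [0.0, 1.0]))  # True
```

In a real network `mu` and `log_sigma` would be ordinary trainable parameters updated by gradient descent alongside the weights, which is precisely what removes the need for the special treatment batch normalization requires.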
In the era of noisy intermediate scale quantum devices, variational quantum circuits (VQCs) are currently one of the main strategies for building quantum machine learning models. These models are made up of a quantum part and a classical part. The quantum part is given by a parametrization $U$, which, in general, is obtained from the product of different quantum gates. In turn, the classical part corresponds to an optimizer that updates the parameters of $U$ in order to minimize a cost function $C$. However, despite the many applications of VQCs, there are still questions to be answered, such as: What is the best sequence of gates to use? How should their parameters be optimized? Which cost function should be used? How does the architecture of the quantum chip influence the final results? In this article, we focus on answering the last question. We will show that, in general, the cost function will tend toward a typical average value the closer the parametrization used is to a $2$-design. Therefore, the closer this parametrization is to a $2$-design, the less the result of the quantum neural network model will depend on its parametrization. As a consequence, we can use the native architecture of the quantum chip to define the VQC parametrization, avoiding the use of additional swap gates and thus diminishing the VQC depth and the associated errors.
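The concentration phenomenon described above can be illustrated numerically: as a parametrization approaches the Haar-random ($2$-design) limit, a cost such as $C = \langle 0|U^\dagger Z_0 U|0\rangle$ clusters around its average. This sketch samples Haar-random unitaries directly via QR decomposition rather than simulating a specific gate sequence, and shows the cost variance shrinking as the Hilbert-space dimension grows:

```python
import numpy as np

def haar_unitary(d, rng):
    # Sample a Haar-random d x d unitary: QR-decompose a complex Ginibre
    # matrix and fix the column phases to make the distribution uniform.
    A = (rng.randn(d, d) + 1j * rng.randn(d, d)) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def cost_samples(n_qubits, n_samples, seed=0):
    # Cost C(U) = <0| U^dagger Z_0 U |0> for Haar-random U, where Z_0 is
    # Pauli-Z on qubit 0 (most-significant bit in the index convention).
    rng = np.random.RandomState(seed)
    d = 2 ** n_qubits
    z0_diag = np.array([1.0 if (k >> (n_qubits - 1)) % 2 == 0 else -1.0
                        for k in range(d)])
    costs = []
    for _ in range(n_samples):
        psi = haar_unitary(d, rng)[:, 0]   # U|0> is the first column
        costs.append(np.real(np.sum(z0_diag * np.abs(psi) ** 2)))
    return np.array(costs)

c1 = cost_samples(1, 2000)
c3 = cost_samples(3, 2000)
print(abs(c1.mean()) < 0.05, abs(c3.mean()) < 0.05)  # both average to ~0
print(c1.var() > c3.var())  # cost concentrates as the dimension grows
```

For a Haar-random state the cost variance is $1/(d+1)$, so the 3-qubit samples are visibly more concentrated than the 1-qubit ones; for circuits forming a $2$-design the same second-moment statistics apply, which is why deep generic parametrizations wash out the dependence on the parameters.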